Add Cell Dissemination via Partial Message Specification #4558
MarcoPolo wants to merge 35 commits into ethereum:master from
Conversation
specs/fulu/das-core.md (outdated)

#### `PartialDataColumnSidecar`

```python
class PartialDataColumnSidecar(Container):
    # Only provided if the index can not be inferred from the Gossipsub topic
    index: ColumnIndex | None
    # Encoded the same as an IHAVE bitmap
    cells_present_bitmap: ByteVector[NUMBER_OF_COLUMNS / 8]  # ceiling if NUMBER_OF_COLUMNS is not divisible by 8
    column: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    # The following are only provided on eager pushes.
    kzg_commitments: None | List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    signed_block_header: None | SignedBeaconBlockHeader
    kzg_commitments_inclusion_proof: None | Vector[Bytes32, KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH]
```
I think there are three possible ways to go with this:

1. Add an `Optional` SSZ type, which we are currently missing (see https://ethereum-magicians.org/t/eip-6475-ssz-optional/12891):

```python
class PartialDataColumnSidecar(Container):
    index: ColumnIndex
    cells_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    cells: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_commitments: Optional[List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]]
    signed_block_header: Optional[SignedBeaconBlockHeader]
    kzg_commitments_inclusion_proof: Optional[Vector[Bytes32, KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH]]
```

2. Separate the data part from the KZG commitments, header, and proof part:

```python
class PartialDataColumnSidecar(Container):
    index: ColumnIndex
    cells_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    cells: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]

class KZGCommitmentsSidecar(Container):
    kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    signed_block_header: SignedBeaconBlockHeader
    kzg_commitments_inclusion_proof: Vector[Bytes32, KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH]
```

3. Have two different types for a column with and without commitments (hopefully with nicer names):

```python
class PartialDataColumnSidecar(Container):
    index: ColumnIndex
    cells_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    cells: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]

class PartialDataColumnSidecarWithCommitments(Container):
    index: ColumnIndex
    cells_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    cells: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    signed_block_header: SignedBeaconBlockHeader
    kzg_commitments_inclusion_proof: Vector[Bytes32, KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH]
```

Also:

- changed `ByteVector` to `Bitlist` (but could also be `Bitvector`)
- `NUMBER_OF_COLUMNS` isn't the length/max-length of a column; it should be `MAX_BLOB_COMMITMENTS_PER_BLOCK`
- I'd just leave the `column_index` in all the time, 8 bytes doesn't seem worth worrying about imho
Preference for 1, as having two containers makes things a bit more annoying: you then need to define when one or the other (or both) are transmitted.

> I'd just leave the `column_index` in all the time, 8 bytes doesn't seem worth worrying about imho

The only reason we would need it is when the subnet != column index, right? 8 bytes is small, but a couple of bytes here and a couple of bytes there and soon we have a whole packet of redundant data. My preference might be to just remove this field and leave it as future work to define how to derive a column index when the subnet count != column count.
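For concreteness, here is a sketch (not from the PR) of why the field is redundant today: while the subnet count equals the column count, the subnet id embedded in the gossipsub topic name *is* the column index. The topic layout follows the Fulu p2p spec; the parsing helper is illustrative only.

```python
DATA_COLUMN_SIDECAR_SUBNET_COUNT = 128
NUMBER_OF_COLUMNS = 128

def infer_column_index_from_topic(topic: str) -> int:
    # Only sound while the subnet-to-column mapping is 1:1.
    assert DATA_COLUMN_SIDECAR_SUBNET_COUNT == NUMBER_OF_COLUMNS
    # Topics look like "/eth2/<fork_digest>/data_column_sidecar_<subnet_id>/ssz_snappy"
    name = topic.split("/")[-2]
    return int(name.rsplit("_", 1)[-1])  # subnet id == column index
```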
1 may seem better, but the other options move the complexity from the type system (and the requisite push to add a new concept to the SSZ spec) to other layers of the stack.
That said, we have had `Optional` as a non-canonical type for some time now, and it may be time to go ahead and add formal support for it.
Including the commitments here is an optimization. We could rely on just receiving them in the block body.
Something worth paying attention to here is also how `DataColumnSidecar` evolves in the gloas spec (though these changes will likely pre-date gloas). At the moment it seems like header + inclusion proof will be removed in favor of `beacon_block_root` only, requiring the beacon block to be seen in advance (though this still will not contain the kzg commitments), which is somewhat natural with epbs.
We could do the same thing here, requiring the block (or a full sidecar) to have been seen before receiving a partial column (and excluding the commitments, since at this point they're still in the beacon block). Similar to what you did below, but with `beacon_block_root` instead of the whole `signed_beacon_block_header`:

```python
class PartialDataColumnSidecar(Container):
    index: ColumnIndex
    cells_present_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    partial_column: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    beacon_block_root: Root
```
This is what I am referring to re: gloas specs btw https://github.com/ethereum/consensus-specs/pull/4527/files
I've updated this to include the KZG commitments and block header as an optional field that will be eagerly pushed. This is called the `PartialDataColumnHeader`, and it's essentially the `DataColumnSidecar` without cells and proofs. In Gloas it will change to align with the modified `DataColumnSidecar` as well (or maybe we can just mandate partial data columns?).
raulk left a comment:
While this mechanistically introduces cell-level deltas, how they precisely integrate into existing PeerDAS processes isn't very clear. How about a section covering the end-to-end behaviour:

- When nodes push `DataColumnSidecar`s
- When nodes send partial IWANTs
- When nodes send partial IHAVEs
- When nodes send `PartialDataColumnSidecar`s
- Dependency on `getBlobsV3`
We've been in the process of implementing this and have run into a few issues, specifically around validation. We probably need to think about and add the validation rules to this spec.

Our current thinking is to include the kzg-commitments in the cell data so that we can verify each cell as we get it. If we don't have them in there (as in the current design), we have to wait for the block data before we can verify whether the cells are accurate or not. This also entails adding some kind of cache to keep track of which peer sent what, in order to penalize them at a future time if we find out the data is invalid. That opens up a few attack vectors, where peers can send us invalid data, filling the cache and preventing us from getting valid data, etc. We would also have to be careful with the cache to account for periods of non-finality with a high degree of forking, where there can be many valid chains.

Currently it seems the complexity of handling all these edge cases might outweigh the relatively small bandwidth savings we are making by excluding the kzg commitments. Maybe there is a clear implementation path here we are missing, however. cc @dknopik
Agreed.
One thing to point out is that this is only an issue for cells you get via an eager push. Without eager push, peers only give you cells that they know you’re missing, and giving a peer this information only happens after getting the block.
The cache is only relevant in the eager push case. If the cache is full, it wouldn't prevent you from getting valid data after you've received your block.
I’m unfamiliar with this scenario. Can you explain a bit more please? Do we expect multiple valid proposers as well?
No strong opinion on what we do here overall, but I’ll point out that the gloas spec also removes some of the fields and requires the validator to wait for the builder’s bid first.
My plan was to limit the eager push cache to 2-3 messages per topic. On first partial publish, we can validate the cache and downscore peers appropriately. I don't think it's an interesting attack to poison this cache, as it, worst case, costs the victim 1 RTT of delay to receive the cells, at the expense of getting downscored and pruned. That said, if we wanted to include additional information for validating eager pushes of cells before we see a blob, we could send them the relevant parts (KZGCommitments, Inclusion Proof, Signed Beacon block header) only when it's an eager push.

@fradamt, do you have any extra context on why gloas removes those fields useful for validating the column before the block in the DataColumnSidecar?
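To make that concrete, here is a minimal sketch of the bounded eager-push cache described above, assuming 2-3 unvalidated partials per topic that get verified (and their senders scored) once the commitments are known. The class and callback names are illustrative, not from the spec.

```python
from collections import deque

MAX_PENDING_PER_TOPIC = 3  # the 2-3 message bound discussed above

class EagerPushCache:
    def __init__(self):
        self.pending = {}  # topic -> deque of (peer_id, partial_message)

    def add(self, topic, peer_id, partial):
        q = self.pending.setdefault(topic, deque(maxlen=MAX_PENDING_PER_TOPIC))
        q.append((peer_id, partial))  # oldest entry is evicted when full

    def validate_all(self, topic, verify_fn, downscore_fn):
        # Called on first partial publish for the block, once commitments
        # are available and the cached cells can finally be verified.
        for peer_id, partial in self.pending.pop(topic, []):
            if not verify_fn(partial):
                downscore_fn(peer_id)
```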
I don't see how this is viable. Cells are 2 KB (and might get down to even 1 KB in the future), so including all KZG commitments means 48 bytes * number of blobs per message, which is 100% overhead as soon as we get to ~40 blobs. The bandwidth savings are very large, if we care about being able to efficiently send individual cells and not just large-ish partial columns (which imo is essential).

Something else we could do is to only include the commitments for the cells that are present, each with its own inclusion proof:

```python
class PartialDataColumnSidecar(Container):
    cells_present_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    partial_column: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    inclusion_proofs: List[Vector[Bytes32, SINGLE_KZG_COMMITMENT_INCLUSION_PROOF_DEPTH], MAX_BLOB_COMMITMENTS_PER_BLOCK]
```
```python
def verify_partial_data_column_sidecar(sidecar: PartialDataColumnSidecar) -> bool:
    """
    Verify if the KZG proofs are correct.
    """
    # The column index also represents the cell index
    cell_indices = [i for i, b in enumerate(sidecar.cells_present_bitmap) if b]
    # Batch verify that the cells match the corresponding commitments and proofs
    return verify_cell_kzg_proof_batch(
        commitments_bytes=sidecar.kzg_commitments,
        cell_indices=cell_indices,
        cells=sidecar.partial_column,
        proofs_bytes=sidecar.kzg_proofs,
    )


def verify_data_column_sidecar_inclusion_proofs(sidecar: PartialDataColumnSidecar) -> bool:
    """
    Verify that the given KZG commitments are included in the given beacon block.
    """
    return all(
        is_valid_merkle_branch(
            leaf=hash_tree_root(sidecar.kzg_commitments[i]),
            branch=sidecar.inclusion_proofs[i],
            depth=SINGLE_KZG_COMMITMENT_INCLUSION_PROOF_DEPTH,
            index=get_subtree_index(get_generalized_index(BeaconBlockBody, "blob_kzg_commitments", i)),
            root=sidecar.signed_block_header.message.body_root,
        )
        for i in range(len(sidecar.kzg_commitments))
    )
```
Yes, the thinking was that in gloas the beacon block is explicitly intended to come first: the columns are not even supposed to be sent before the beacon block has reached everyone (because the structure of the slot becomes propagate beacon block -> attest -> propagate payload and blobs -> ptc vote). However, for gloas purposes we could go back to "self-verifying columns" (kzg commitments + inclusion proof) if the added complexity turns out not to be worth it (I don't think the current work on gloas devnet0 has surfaced these issues, so maybe worth bringing it up to the people working on it). Whether we do that or not doesn't change things for partial messages though: including all kzg commitments + inclusion proof is fine in a column, since there's only one kzg commitment per cell; it is not fine in a small partial message.
specs/fulu/p2p-interface.md (outdated)

```python
class PartialDataColumnSidecar(Container):
    cells_present_bitmap: Bitlist[MAX_BLOB_COMMITMENTS_PER_BLOCK]
    partial_column: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
```
I've updated the spec to add an optional `PartialDataColumnHeader`. Without this, peers need to wait for the beacon block before being able to request cells or respond to a peer's partial message request. From observed data on Xatu and from simulations, DataColumns on mainnet propagate faster than beacon blocks (by around 100ms).

This does introduce overhead that increases at a rate of 48 bytes * number of commitments, but it also aligns a bit more with the existing validation model of `DataColumnSidecar`, which should make implementation a bit easier. The main downside in my opinion is that this data is duplicated on all column subnets. Maybe in the future we could consider a separate gossipsub topic for disseminating just this information. More details here: https://hackmd.io/Q9xQfN8zTtywJ75g312t8Q

cc @fradamt because we were going back and forth on this.
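For scale, a quick back-of-envelope on that overhead, using the 336-byte fixed size cited in the discussion below (assumed to cover the signed block header plus inclusion proof):

```python
FIXED_HEADER_BYTES = 336   # signed block header + inclusion proof (assumed)
KZG_COMMITMENT_BYTES = 48

for n in (8, 48, 128):
    total = FIXED_HEADER_BYTES + KZG_COMMITMENT_BYTES * n
    print(f"{n:3d} commitments -> {total} bytes (~{total / 1024:.1f} KiB)")
# 128 commitments -> 6480 bytes (~6.3 KiB), in line with the ~6kb estimate below
```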
specs/fulu/p2p-interface.md (outdated)

```python
    # Optional header, only sent on eager pushes
    header: List[PartialDataColumnHeader, 1]
```
I assume the primary motivation of not using `Union[None, PartialDataColumnHeader]` is missing support in some SSZ implementations, and the fact that it is not currently used elsewhere in Ethereum?
Why don't we always have the header though? It doesn't seem that large (336 fixed bytes + 48 * number of commitments). Even with 128 commitments, which is a very high estimate for bpo3, it would be ~6kb of extra data per subnet if we always include the header.
> I assume the primary motivation of not using `Union[None, PartialDataColumnHeader]` is missing support in some SSZ implementations, and the fact that it is not currently used elsewhere in Ethereum?
Correct. See also this PR: #3906
> Why don't we always have the header though? It doesn't seem that large (336 fixed bytes + 48 * number of commitments). Even with 128 commitments, which is a very high estimate for bpo3, it would be ~6kb of extra data per subnet if we always include the header.
If we eagerly push it, the peer will always have the header when it needs it. We don't need to always include it. We don't gain anything by always including it, it is purely redundant behavior.
Here's an example exchange between two peers A and B:
```
+-----------------+                                 +-----------------+
|        A        |                                 |        B        |
+-----------------+                                 +-----------------+
         |                                                   |
         | (1) Parts Advertise                               |
         |     Header + Bitmap_A (what A has)                |
         |-------------------------------------------------->|
         |                                                   |
         |                                   (2) Parts Reply |
         |                            Bitmap_B (what B has)  |
         |                       + Optional: Cells B_not_in_A|
         |<--------------------------------------------------|
         |                                                   |
         | (3) Send Missing Parts                            |
         |     Cells A_not_in_B (parts B lacks that A has)   |
         |-------------------------------------------------->|
```
That hopefully makes it clear that messages 2 & 3 don't need a header.
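The bitmap arithmetic behind steps (2) and (3) is just set difference. A toy sketch, modeling the bitmaps as plain ints rather than SSZ bitlists:

```python
def cells_to_send(have_local: int, have_remote: int) -> int:
    # Send exactly the cells we have that the other side lacks.
    return have_local & ~have_remote

bitmap_a = 0b1011  # A has cells 0, 1, 3 (LSB = cell 0)
bitmap_b = 0b0110  # B has cells 1, 2
assert cells_to_send(bitmap_b, bitmap_a) == 0b0100  # step (2): B sends cell 2
assert cells_to_send(bitmap_a, bitmap_b) == 0b1001  # step (3): A sends cells 0, 3
```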
I understand that it is redundant. My point is that (1) it's just simpler to always send it, (2) it's not that much data, and (3) this might be a premature optimization.
Yeah I think I agree with @pawanjay176 here. I'd prefer to just include those fields in PartialDataColumnSidecar.
> it's just simpler to always send it

Marginally simpler. In my current implementation in Gossipsub, the application gets an `eagerPush` boolean param. It just needs to add the header if `eagerPush` is true. On the receiving side, you need to keep state across messages anyway (for the whole partial thing to eventually become complete), so there aren't changes needed there. I don't think it's much harder to avoid sending useless data.
The behavior of the receiver should be to ignore the header data if it already has the header anyways, as there is no point in duplicating earlier compute work.
> it's not that much data
~25 commitments equates to another packet of data. We need to receive all packets of a message before we can decode it. More packets not only mean more delay before we can process the message, but also a chance for that packet to have to be retransmitted. This is a simplistic model (packet drops aren't random), but the point is that small bits of extra data aren't free.
> this might be a premature optimization
I don't think of it as an optimization at all. It's about not doing extra useless work.
Is there something specific you think is gained by always including the header? I'm not convinced by the "simpler" argument.
Thanks for the response.
I'll look into implementing this and get back to you!
Why not do this already?
> `PartialDataColumnHeader` needs to be validated, then the cell and proof data.
>
> Once a `PartialDataColumnHeader` is validated for a corresponding block on any subnet (gossipsub topic), it can be used for all subnets.
Do we want to include anything in the spec about only pushing the partial data column header once per block?
specs/fulu/p2p-interface.md (outdated)

> For all partial messages:
>
> - _[IGNORE]_ If the received partial message contains only cell data, the node has seen the corresponding `PartialDataColumnHeader`.
Wait, what if we get only cell data without ever having seen the corresponding header? I mean, we should ignore those messages, right? Not the ones which have cells and for which we've seen the header before.
Okay, this means that we will IGNORE partial messages for which we have NOT seen the corresponding header before.
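An illustrative reading of that rule (the field and cache names here are assumptions, not spec): cell-only messages are ignorable, not rejectable, until the header for the same group has been seen.

```python
def should_process_partial(partial, seen_header_groups: set) -> bool:
    # Messages carrying a header can be validated on their own.
    if partial.header:  # non-empty List[PartialDataColumnHeader, 1]
        return True
    # Cell-only message: IGNORE until the corresponding
    # PartialDataColumnHeader (keyed by group id) has been seen.
    return partial.group_id in seen_header_groups
```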
> missing. Clients SHOULD NOT send a `PartialDataColumnHeader` non-eagerly, as this is wasted bandwidth.
>
> Clients MAY choose to not eagerly push the `PartialDataColumnHeader` if it has
Okay so we're not enforcing that the header should be sent ONLY once per block? What do you think is a valid use case for allowing this?
This is a MAY. If you have the same peer on multiple topics, you don't need to send them duplicate headers on all topics.
Ah, I misunderstood. You're asking why we aren't requiring the header be sent AT MOST once, right?
I don't think we want this at most once per peer because then you'd have to track some peer state across topics which can be a bit annoying.
We may be able to specify at most once per peer and topic, and downscore them if they send us more. But could you explain what you see as the benefit of this? Is it that we encourage peers to avoid redundant headers? Or that we have another way of identifying buggy peers?
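If we did want "at most once per peer and topic", the bookkeeping would be small. A sketch with illustrative names (whether duplicates actually deserve downscoring is exactly the open question above):

```python
seen_headers = set()  # (peer_id, topic, block_root) triples

def on_partial_header(peer_id, topic, block_root, downscore) -> bool:
    key = (peer_id, topic, block_root)
    if key in seen_headers:
        downscore(peer_id)  # duplicate header on the same peer + topic
        return False
    seen_headers.add(key)
    return True
```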
> compatibility with nodes that do not yet support partial messages.
>
> Avoid forwarding the full `DataColumnSidecar` message to peers that requested partial messages for that given topic. It is purely redundant information.
@MarcoPolo IIUC, if a peer has explicitly requested partial messages for a given topic and we support sending partials, Gossip will NEVER send that peer full messages, right? Will doing so result in us getting down-scored by the other peer (i.e. if a peer sees both full and partial messages from us on the same topic)?
> if a peer has explicitly requested partial messages for a given topic and we support sending partials, Gossip will NEVER send that peer full messages, right?
Yes
> Will doing so result in us getting down-scored by the other peer (i.e. if a peer sees both full and partial messages from us on the same topic)?
Not currently. I'm not sure it's worth doing but maybe I'm missing something.
specs/fulu/p2p-interface.md (outdated)

> For verifying the `PartialDataColumnHeader` in a partial message:
>
> - _[IGNORE]_ The header is the first valid header for the given block root.
This [IGNORE] is interesting, because it means to say "ignore this header, but treat it as valid for the sidecar it is contained in".

Maybe we need another rule in the "For all partial messages" section above, something like: *If the received partial message contains a header, it must have either been seen (as valid) already or pass validation now.*

Alternatively, we remove *[IGNORE] The header is the first valid header for the given block root.* and add something like: *The contained `PartialDataColumnHeader` must be valid. The result of `PartialDataColumnHeader` validation can be cached per header.*
> Maybe we need another rule in the "For all partial messages" section above, something like: *If the received partial message contains a header, it must have either been seen (as valid) already or pass validation now.*
I agree with this one
Some other points we may be missing:
- Clients MAY not revalidate cells that were already considered valid.
- Do not process a message pertaining to a slot that is outside of the relevant time range.
Do we want to add validation to reject messages that have neither header nor cells? Such messages are useless.
and recommend a flag to flip this behavior. This was discussed in the partial cells working group meeting, and there was consensus for this change.
| Bits | Meaning                                               |
| :--: | ----------------------------------------------------- |
| 00   | The peer does not have the cell and does not want it  |
| 01   | The peer does not have the cell and does want it      |
| 1X   | The peer has the cell and is willing to provide it    |
What should we do if it's 11, where a peer has a cell and is requesting it? Should such instances be ignored? Or maybe the peer should be downscored because it's acting buggy?
11 and 10 are the same. The second bit is ignored; that's what the X represents. Do you have a suggested rephrasing?
Yes, that makes sense. But I'm just saying we should not encounter 11 and if we do, the client is buggy and we should consider unpeering from them because they could be acting maliciously. Right?
Currently it's common for clients to just set the request bitmask to all 1's. If they don't change that when they receive a cell, it'll be common to see 11.
The argument for this change was that semantically a request bit doesn't make sense if the cell is available, so we should ignore the request bit if the available bit is set. Previously we treated 10 as unused and ignored, but after discussion with @sukunrt and @raulk, we opted for ignoring the request bit and treating 11 and 10 the same.
Got it. Okay this makes sense. Thank you for explaining 🙂
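A small sketch of the agreed semantics, where the want bit is masked out whenever the have bit is set (bit layout assumed: high bit = have, low bit = want, matching the table's left-to-right order):

```python
def decode_cell_state(bits: int) -> tuple[bool, bool]:
    has = bool(bits & 0b10)
    wants = bool(bits & 0b01) and not has  # want bit ignored if have is set
    return has, wants

assert decode_cell_state(0b00) == (False, False)
assert decode_cell_state(0b01) == (False, True)
assert decode_cell_state(0b10) == (True, False)
assert decode_cell_state(0b11) == (True, False)  # same as 10
```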
specs/fulu/p2p-interface.md (outdated)

> - _[REJECT]_ The hash of the block header in `signed_block_header` MUST be the same as the partial message's group id.
What is a "group id" exactly? I see the section below but it doesn't really explain it.
It's a Gossipsub Partial message term: https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/partial-messages.md#terms-and-definitions
Maybe slightly better:

> - _[REJECT]_ The hash of the block header in `signed_block_header` MUST be the same one identified by the partial message's group id.
Hmm that link doesn't really help. How would one access this field? Is this like a topic id? Is this something that clients will have direct access to, or will it be abstracted away in a library? Sorry if these are dumb questions. It's just that this is unclear to me.
The message the application receives from the library will include this, just like the topic id. Clients will have direct access to this.
Not a dumb question, I appreciate the fresh perspective. Do you have a suggestion on how to make this clearer?
Okay that makes more sense. No suggestion on how to improve this really. I suppose implementers will already have some sense of what this is & how to access it.
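Purely as illustration of how the [REJECT] check might look, assuming the library surfaces `group_id` on the received message (as described above) and that the group id is a one-byte version prefix followed by the block root (see the note about the version byte near the end of this thread); none of this encoding is confirmed here:

```python
def valid_group_id(partial, hash_tree_root) -> bool:
    # Assumed layout: group_id = version_byte || hash_tree_root(block_header)
    block_root = hash_tree_root(partial.header.signed_block_header.message)
    return partial.group_id[1:] == block_root
```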
specs/fulu/p2p-interface.md (outdated)

> Clients MAY choose to not eagerly push the `PartialDataColumnHeader` if it has previously sent the header to the peer on another topic.
>
> Clients SHOULD request cell data from peers after validating a
Just want to mention some inconsistent language for:
- cells
- cell/proof data
- cell data
I'm pretty sure you always mean "cells and proofs" in these. I would suggest improving this.
I took a pass at this. In general I was hesitant to change "cells" to "cells and proofs" everywhere as it adds verbosity. I added some language and a check to hopefully make it clear these travel together.
to account for the version byte in the group id
In draft until the libp2p/specs work is merged and we have multiple implementations of this. Also requires TODOs to be addressed.
This PR specifies how consensus clients use Gossipsub's Partial Messages to disseminate cells rather than only full columns.
More context in this ethresearch post.